Application of Computation to Life Sciences Problems throughout the Discovery Development Process
Anuradha Acharya Chief Executive Officer, Ocimum Biosolutions
The promise of the human genome implies an order-of-magnitude increase in the number of targets over the sum total of all pharmaceutical research to date. Bioinformatics integration of genomic and proteomic data throughout the R&D process can enable the identification and development of new drugs. For instance, Curagen's CG53135 progressed from discovery to the clinic in under four years. This means substantial savings compared with the estimated $897 million and a huge reduction in the time taken to develop a drug by traditional means. This has fueled the drive to seek opportunities in drug discovery research as never before, and it has helped break down domain barriers that formerly existed. The implications of such developments are twofold: first, an increased understanding of the underlying processes of drug discovery; second, the complexity of sorting and making sense of the huge amounts of data these technologies generate.
Sorting meaningful data and interpreting it is a time-consuming task that may force researchers to focus on crunch work rather than on addressing their primary questions through analysis. Bioinformatics, the new interdisciplinary science, fills this vacuum by providing researchers with tools and means of data management, analysis and interpretation, supporting their quest to understand the drug discovery process in its entirety and helping the pharma industry discover drugs like never before.
This presentation is built around a hypothetical drug discovery company, and a case study is taken through the various phases of drug discovery. Any contemporary drug discovery company faces the challenge of burgeoning data and the need to find validated targets, and other uses of IT, at a high pace. A synergy with bioinformatics solution providers could lend a helping hand, enabling a drug discovery company to focus on its core competencies: research, innovation and development. Various tools for biological data analysis and their relevance at each growth and technological-capability phase of a company are discussed, with an overall perspective on information management in life sciences research.
Revisiting Genomic Data-mining: A Physicist's Perspective
Rajeev Gangal, Chief Scientific Officer, SciNova Informatics
Annotation of genomic data using sequence information has reached a plateau in terms of new ideas and methodologies. Structural data might shed more light on the biology of life, but it too is limited by the domain expertise required, the cost of experimentation, and so on.
Perhaps the answer lies in revisiting genomic data analysis with tools like non-linear time series analysis, non-extensive thermodynamics and machine learning! The talk will focus on the application of a physicist's toolbox to genomic data mining. Case studies in protein fold recognition and drug target identification will be presented.
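As a concrete illustration of this toolbox (a minimal sketch in Python with a hypothetical base encoding and parameters, not the speaker's method), a DNA sequence can be mapped to a numeric series and scored with sample entropy, a standard non-linear time-series statistic:

    import numpy as np

    # Hypothetical numeric encoding of the four bases; any fixed mapping
    # that turns a sequence into a 1-D signal would do for illustration.
    BASE_VALUES = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}

    def sequence_to_signal(seq):
        """Map a DNA string to a 1-D numeric series."""
        return np.array([BASE_VALUES[b] for b in seq.upper()])

    def sample_entropy(x, m=2, r=0.5):
        """Sample entropy: -log of the ratio of (m+1)-point template
        matches to m-point template matches, within tolerance r."""
        def matches(length):
            n = len(x) - length + 1
            emb = np.array([x[i:i + length] for i in range(n)])  # delay embedding
            # Chebyshev distance between every pair of templates
            d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
            return np.sum(d[np.triu_indices(n, k=1)] <= r)
        a, b = matches(m + 1), matches(m)
        return -np.log(a / b) if a and b else float("inf")

    signal = sequence_to_signal("ACGTGGGTACCGTAGCTAGCGGGTTTACG")
    print("sample entropy:", sample_entropy(signal))

Lower entropy flags more deterministic local structure; in practice such features would be fed, alongside others, into a machine-learning classifier of the kind used for fold recognition.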
Addressing Key Life Sciences Challenges With Intel Architecture
Vijay Keshav, Industry Solutions Manager - Asia Pacific (High Performance Computing & Life Sciences) Intel Asia Electronics Inc.
This presentation touches upon three key challenges that Life Sciences researchers face with respect to their Information Technology needs and how they can be addressed.
Given the rapid growth in Life Sciences research, the big bottleneck today is managing database size, computational workloads and application development to sustain a scalable and interoperable model moving forward.
We take a quick look at how the industry is responding to these challenges, with specific details on Intel's work around Life Sciences, based on a move to standards-based open systems architecture.
In the last section we touch upon the various compute models, including the Grid-based compute model required to tackle some of the big computational problems of assembling, annotating and mapping gene structure, and then move on to larger problems in the areas of protein structure and protein function in pathways. Examples of customers benefiting from such models are also highlighted.
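By way of a hedged illustration (hypothetical Python, not Intel's software), the grid model rests on splitting a genome into chunks that independent workers process in parallel; a grid scheduler applies the same pattern across nodes instead of local processes:

    from multiprocessing import Pool

    def annotate_chunk(chunk):
        """Stand-in for a per-chunk task such as gene finding or BLAST
        scoring; here we just compute GC content as a trivial annotation."""
        name, seq = chunk
        gc = sum(base in "GC" for base in seq)
        return name, gc / len(seq)

    if __name__ == "__main__":
        # Hypothetical genome split into independently processable chunks.
        chunks = [("contig_%d" % i, "ACGTGGCCA" * 1000) for i in range(8)]
        with Pool(processes=4) as pool:
            for name, gc in pool.map(annotate_chunk, chunks):
                print(name, "GC fraction: %.2f" % gc)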
Leveraging Off-the-Shelf Standards-based Technology for Research
Stephen Wong, Systems Manager, Bioinformatics Institute
The Bioinformatics Institute was conceived as a research and postgraduate training institute within a multi-disciplinary framework. It recognizes the need to emphasize depth as well as breadth in all its activities to help lay a strong foundation for building a thriving biomedical R&D hub in Singapore. Its mission is to encourage, develop and support trained expertise with in-depth knowledge of biology and information technology, to advance biomedical research and development in Singapore. The Bioinformatics Institute will share its experience in technology investment and how it facilitates and enhances its research capabilities. In particular, Windows-based high performance clusters on Dell servers will be explored and discussed.
The Global Outlook for Bio-IT in the Life Sciences and the Role Singapore Can Play
Daphne Chung, Senior Analyst, Regional Life Sciences and Healthcare Industries, IDC Asia/Pacific
This presentation will reveal how IDC sees the Life Science market currently and over the next five years, and touch on the trends and more recent developments that impact the market. With Singapore having identified the industry as a key economic sector for the future, the presentation will also focus on what has been happening in the market here and where IDC sees Singapore in the regional scheme of things.
Algorithms for Microarray Analysis
Dr Ramesh Hariharan, Chief Technical Officer, Strand Genomics Pte Ltd.
Gene Microarrays are becoming increasingly popular as platforms for high throughput gene expression studies. However, the analysis of microarray data remains a complex task because of the volume of data involved, the amount of noise in the data, and the consequent large number of false positives. Wrong analysis decisions can easily blow up the number of false positives by an order of magnitude, as this talk will illustrate.
To analyse a microarray experiment fruitfully, the Oligo Design step must have chosen the right oligos to put on the microarray, the Image Analysis step must have quantified the spots accurately, the right Normalization must have been performed on the arrays, and the right Statistical tests and algorithms applied. Errors in any one of these steps, when beyond limits, will cause the analysis to go haywire.
The talk will begin by showing examples of how errors in Oligo Design and Image Analysis lead to erroneous results. Next, the importance of Normalization and various algorithms for Normalization and Probe Aggregation (in case the array has multiple probes per gene, as in an Affymetrix array) will be described, with reference to the Affymetrix Latin Square Spike-In Data Set. As we will show, some of the issues that arise in Probe Aggregation point back to the accuracy of the Oligo Design process, and algorithms for analysis will need to take this into account. The talk will also compare various algorithms for analysis of Affymetrix arrays with respect to their performance on the Latin Square data set.
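By way of illustration (a minimal numpy sketch on hypothetical data, not Strand's algorithms), quantile normalization forces every array onto a common intensity distribution, and a per-gene median serves as a simple stand-in for probe aggregation:

    import numpy as np

    def quantile_normalize(x):
        """Force every column (array) of a probes-by-arrays matrix onto
        the same distribution: the mean of the sorted columns."""
        ranks = np.argsort(np.argsort(x, axis=0), axis=0)  # per-column ranks
        target = np.sort(x, axis=0).mean(axis=1)           # reference distribution
        return target[ranks]

    def aggregate_probes(x, probe_to_gene):
        """Summarize multiple probes per gene by the median across probes
        (a robust stand-in for methods such as median polish)."""
        genes = sorted(set(probe_to_gene))
        rows = {g: [i for i, pg in enumerate(probe_to_gene) if pg == g] for g in genes}
        return {g: np.median(x[idx], axis=0) for g, idx in rows.items()}

    rng = np.random.default_rng(0)
    raw = rng.lognormal(mean=5, sigma=1, size=(6, 4))      # 6 probes, 4 arrays
    norm = quantile_normalize(np.log2(raw))
    print(aggregate_probes(norm, ["geneA"] * 3 + ["geneB"] * 3))

Swapping either function for a poorer choice, and re-ranking the resulting gene list, is exactly the kind of comparison a spike-in data set makes measurable.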
The last part of the talk will briefly discuss pros and cons of various Clustering Algorithms for clustering profile data, and sketch the use of Supervised Learning methods to learn differences between classes of genes or samples.
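A minimal sketch of the first of these (scipy on hypothetical data; the talk's comparisons go further): average-linkage hierarchical clustering on a correlation distance, which groups genes by profile shape rather than magnitude:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(1)
    # Hypothetical expression matrix, 10 genes x 6 conditions: the first
    # five genes share one profile shape and the remaining five another.
    base_a = np.array([1, 2, 3, 4, 5, 6.0])
    base_b = base_a[::-1]
    profiles = np.vstack([base_a + rng.normal(0, 0.3, 6) for _ in range(5)] +
                         [base_b + rng.normal(0, 0.3, 6) for _ in range(5)])

    # Correlation distance compares profile shapes, not absolute levels.
    d = pdist(profiles, metric="correlation")
    tree = linkage(d, method="average")
    labels = fcluster(tree, t=2, criterion="maxclust")
    print("cluster labels:", labels)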
All case studies described in the talk have come out of experience in building the recently released Oligo Design, Image Analysis and Statistical Data Analysis products at Strand. In particular, the somewhat tight coupling between Oligo Design and Data Analysis illustrated in this talk is a result of having both products at hand and using one to explain the results of the other.
Information Technology for Life Sciences
John E. Davies, Vice President, Sales and Marketing Group Director, Solutions Market Development Group, Intel
Several critical trends are driving the evolution of Life Sciences. The pharmaceutical business model is changing from one dependent on a small number of high-profit drugs to one of inexpensive, rapidly developed, designer drugs. At the same time, the lines between pharmaceutical and biotechnology companies will continue to blur. Governments see Life Sciences industries as the next big economic wave and are pumping money into Research & Development. While Drug Discovery is in the limelight, AgBio, Biodefense, and Industrial/Environmental applications of bioscience are all growing. These trends are fueling dramatic growth in the role played by computing in Life Sciences; computing needs will surpass gains from Moore's Law.
Innovative Technologies for Next Generation Drugs
Santosh K. Mishra, Ph.D., Managing Director, Lilly Systems Biology Pte Ltd, Singapore
Novel technologies offering views of complex biological systems at near-molecular resolution have emerged with breakthroughs in ultrahigh-throughput DNA sequencing, transcript (mRNA) profiling as whole-genome biosensors, and protein profiling strategies based on the core technologies of analytical chemistry and mass spectrometry.
The large volume of data generated by these technologies has created the need to manage, analyze, synthesize and distribute the data and to generate coherent knowledge from it.
This presentation will review the Systems Biology approach to mining these databases so that scientists can accelerate the drug discovery and development process.
Integration of Biological Data, Chemical Data and Applications Across the Drug Discovery Value Chain
Dr Jason Theodosiou, Vice President Global Sales, LION bioscience AG
The biggest challenge in drug discovery today is data access and integration. The issue has become a major bottleneck to R&D productivity for most pharmaceutical and biotechnology companies.
The cause of this challenge lies in the fact that biomedical and chemical data and applications are geographically highly dispersed, complex and heterogeneous in type and structure, and constantly changing.
This presentation will illustrate, with case study examples, a comprehensive integration and decision-support platform that optimises life sciences R&D.
Development, Uses and Future of Biological Databases for Bioinformatics Research
A/P Chia Tet Fatt, Director, Bioinformatics Research Centre (BIRC), Nanyang Technological University
Bioinformatics tools and applications are totally dependent on biological data. Biological data are normally consolidated according to the nature of the work and their unique characteristics; hence the creation of databases such as genomic, proteomic, array-informatics and so forth. In our quest to understand the biological implications and make connections among them, various molecular techniques and tools are used to decipher the genomic and proteomic databases. This has resulted in the creation of an array of new databases. This explosion of biological information at various levels has in turn resulted in many new and exciting bioinformatics tools. However, the missing link that biologists are looking for is the integration of the existing databases so that they function together like a society of interacting parts. In this paper, I will present a novel integrative approach towards the realization of such a dream.
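A minimal sketch of the integration idea (toy in-memory stores in Python, not BIRC's system): separate genomic and proteomic records are joined through a shared gene identifier, so one query spans both worlds:

    # Two toy 'databases', each keyed the way its own community keys them;
    # a shared gene identifier is what makes integration possible.
    genomic_db = {
        "BRCA1": {"chromosome": "17", "exons": 24},
        "TP53":  {"chromosome": "17", "exons": 11},
    }
    proteomic_db = {
        "BRCA1": {"uniprot": "P38398", "domains": ["RING", "BRCT"]},
        "TP53":  {"uniprot": "P04637", "domains": ["DNA-binding"]},
    }

    def integrated_view(gene):
        """Answer a cross-database question with a single joined record."""
        record = {"gene": gene}
        record.update(genomic_db.get(gene, {}))
        record.update(proteomic_db.get(gene, {}))
        return record

    print(integrated_view("BRCA1"))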
Storage in Medical and Bio-IT
Ganesh Kumar Panadam, Technical Marketing Manager, Field Applications Engineering, Asia Pacific Sales & Marketing, Seagate
Imagine a doctor holding in his palmtop the entire medical history of his patients. Imagine a brain surgeon discussing images of a brain scan with his colleagues on his notebook computer... Imagine a researcher carrying the entire genetic and medical references around in a portable computer. Now stop imagining, because all of this is possible when information is stored in electromagnetic devices called hard disc drives. Storage has become an important requirement for those who work with vast amounts of data. Where fast access and an efficient, dependable storage medium are needed, hard discs have become the preferred choice. But choosing the right drive for your application is not necessarily a walk in the park.
Biological Pathway Discovery and Prediction of Drug Action Using Phenomic Computational Simulations
Kumar Selvarajoo, Director of Technology, Systome Therapeutics
The living organism consists of a myriad of complex biological networks. Most research programmes in the past attempted to break these networks down into a series of single-step reactions and to determine the 'crucial step' that differs significantly between the normal and diseased condition. The drug discovery team then develops compounds that act at these steps to bring a cure to the disease.
Today, one can confidently say that such an approach has failed to find cures for complex diseases such as diabetes, asthma and ischaemia because the method does not take account of the interrelationships within biological systems.
We devised a novel computational simulation based on engineering principles that looks at biological networks, namely metabolic pathways in our case, as a system of interacting steps. Our model was first designed to predict and compare metabolic phenotypes of different cell types.
A glycolytic simulation was performed in proof-of-concept experiments to identify various species-specific and environmentally regulated metabolic circuits. The initial results showed that simulations can be used as a research tool to guide wet-bench experimentation. Our results show that systems-based computational simulations can be successfully exploited to accelerate drug discovery research and to elucidate complex metabolic phenotypes associated with human disease.
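The flavor of such a simulation can be shown in miniature (toy rate constants and lumped reactions in Python/scipy, not Systome's model): a three-metabolite glycolytic chain treated as coupled Michaelis-Menten steps, where perturbing any one step shifts the whole system:

    import numpy as np
    from scipy.integrate import odeint

    def mm(s, vmax, km):
        """Michaelis-Menten rate law."""
        return vmax * s / (km + s)

    def glycolysis(y, t, p):
        glc, g6p, pyr = y                    # glucose, G6P, pyruvate
        v1 = mm(glc, p["v1"], p["k1"])       # lumped hexokinase step
        v2 = mm(g6p, p["v2"], p["k2"])       # lumped lower glycolysis
        v3 = mm(pyr, p["v3"], p["k3"])       # pyruvate consumption
        return [p["feed"] - v1, v1 - v2, v2 - v3]

    p = dict(feed=0.5, v1=1.0, k1=0.1, v2=0.8, k2=0.2, v3=1.2, k3=0.3)
    t = np.linspace(0, 200, 2000)
    print("baseline steady state:",
          odeint(glycolysis, [1.0, 0.0, 0.0], t, args=(p,))[-1].round(3))

    # 'Inhibiting' one lumped step shifts concentrations along the whole
    # chain, not just at the targeted reaction.
    p_drug = dict(p, v2=0.6)
    print("with inhibited step: ",
          odeint(glycolysis, [1.0, 0.0, 0.0], t, args=(p_drug,))[-1].round(3))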
Protocol-Driven Research: Can Bio-Informatics Finally Become a Science?
Elia Stupka, Bioinformatic Program Manager, Temasek Life Science Laboratory
The fundamental principles of science and scientific publication are the ideas of protocols and clear reproducibility. Bioinformatics has so far often avoided these principles. It is only by taking them into account and making a science out of bioinformatics that the discipline can proceed successfully and scientists can focus again on research rather than technicalities. BioPipe is our attempt at bringing these principles back to bioinformatics by creating software that automates, tracks and records complex bioinformatics workflows.
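The flavor of the idea can be sketched minimally (hypothetical code, not the BioPipe API): every step of a workflow runs through a wrapper that records its parameters, an output checksum and its timing, so a run can be audited and replayed:

    import hashlib, json, time
    from pathlib import Path

    class Pipeline:
        """Run named steps in order, recording parameters and output
        checksums so an analysis can be audited and reproduced."""
        def __init__(self, log_path="pipeline_log.json"):
            self.log_path = Path(log_path)
            self.records = []

        def run_step(self, name, func, params):
            started = time.time()
            output = func(**params)
            digest = hashlib.sha256(repr(output).encode()).hexdigest()
            self.records.append({
                "step": name,
                "params": params,
                "output_sha256": digest,
                "seconds": round(time.time() - started, 3),
            })
            return output

        def save(self):
            self.log_path.write_text(json.dumps(self.records, indent=2))

    # Usage: each analysis step becomes a tracked, replayable record.
    pipe = Pipeline()
    masked = pipe.run_step("mask_repeats", lambda seq: seq.lower(), {"seq": "ACGTACGT"})
    pipe.save()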
Integrated Storage Management Strategies for High Availability of Mission-Critical Data in the Post Genomic Era
Alvin Ow, Regional Technical Consultant Manager, VERITAS Software Asia
Alvin Ow is the Regional Technical Consultant Manager (Pre-Sales) of VERITAS Software Asia South where he is responsible for providing consultancy and technical advice to customers on achieving the highest level of data availability for their organizations.
In this role, Alvin is instrumental in examining VERITAS Software's customers' requirements before coming up with customized solutions.
A graduate from the National University of Singapore with a Master of Science degree in Information and Computer Science, Alvin is an acknowledged authority in the area of Enterprise Data Availability.
Prior to his appointment with VERITAS Software, Alvin was the Assistant Manager of Presales Technical Support for ECS Computers Asia Pte Ltd. In the five years he served there, Alvin managed its pre- and post-sales departments, and led a team of presales engineers providing presales consulting for Sun Microsystems' platforms. He also contributed to a number of major deals during his tenure at ECS.
Computational Modelling Trends in BioEngineering: The Construction of the Physiome and its Implications for Life Sciences
Dr. Nicolas Smith, Director, Modelling Cellular Function Program, Centre of Research Excellence, BioEngineering Institute, University of Auckland
An overview of new developments in integrative computational bioengineering and how these developments will impact basic science, medical diagnosis and drug discovery.